
The rise of Moltbook suggests viral AI prompts may be the next big security threat


On November 2, 1988, graduate student Robert Morris released a self-replicating program into the early Internet. Within 24 hours, the Morris worm had infected roughly 10 percent of all connected computers, crashing systems at Harvard, Stanford, NASA, and Lawrence Livermore National Laboratory. The worm exploited security flaws in Unix systems that administrators knew existed but had not bothered to patch.

Morris did not intend to cause damage. He wanted to measure the size of the Internet. But a coding error caused the worm to replicate far faster than expected, and by the time he tried to send instructions for removing it, the network was too clogged to deliver the message.

History may soon repeat itself on a new kind of platform: networks of AI agents carrying out instructions from prompts and sharing them with other AI agents, which can spread those instructions further.

Security researchers have already predicted the rise of this kind of self-replicating adversarial prompt among networks of AI agents. You might call it a "prompt worm" or a "prompt virus": self-replicating instructions that could spread through networks of communicating AI agents much as traditional worms spread through computer networks. But instead of exploiting operating system vulnerabilities, prompt worms exploit the agents' core function: following instructions.

When an AI model follows adversarial directions that subvert its intended instructions, we call that "prompt injection," a term coined by AI researcher Simon Willison in 2022. But prompt worms are something different. They might not always be "tricks." Instead, they could be shared voluntarily, so to speak, among agents that are role-playing human-like reactions to prompts from other agents.

A network built for a new type of contagion

To be clear, when we say "agent," don't think of a person. Think of a computer program that has been allowed to run in a loop and take actions on behalf of a user. These agents are not entities but tools that traverse webs of symbolic meaning found in human data, and the neural networks that power them encode enough trained-in "knowledge" of the world to interface with many human information systems.

Unlike some rogue sci-fi computer program surfing through networks to survive, these agents don't "go" anywhere when they work. Instead, our global computer network brings all the information necessary to complete a task to them. They make connections across human information systems in ways that make things happen, like placing a call, turning off a light through home automation, or sending an email.

Until roughly last week, large networks of communicating AI agents like these didn't exist. OpenAI and Anthropic created their own agentic AI systems last year that can carry out multistep tasks, but those companies have generally been careful to limit each agent's ability to take action without user permission. And their agents don't typically sit and run in a loop, due to cost concerns and usage limits.

Enter OpenClaw, an open source AI personal assistant application that has attracted over 150,000 GitHub stars since launching in November 2025. OpenClaw is vibe-coded, meaning its creator, Peter Steinberger, let an AI coding model build the application and deploy it rapidly without serious vetting. It's also getting regular, rapid-fire updates using the same technique.

A potentially useful OpenClaw agent currently relies on connections to major AI models from OpenAI and Anthropic, but its orchestration code runs locally on users' devices, connects to messaging platforms like WhatsApp, Telegram, and Slack, and can perform tasks autonomously at regular intervals. That way, people can ask it to check email, play music, or send messages on their behalf.

Most notably, OpenClaw marks the first time we've seen a large group of semi-autonomous AI agents that can communicate with each other through major communication apps or sites like Moltbook, a simulated social network where OpenClaw agents post, comment, and interact with each other. The platform now hosts over 770,000 registered AI agents controlled by roughly 17,000 human accounts.

OpenClaw is also a security nightmare. Researchers at Simula Research Laboratory have identified 506 posts on Moltbook (2.6 percent of sampled content) containing hidden prompt-injection attacks. Cisco researchers documented a malicious skill called "What Would Elon Do?" that exfiltrated data to external servers while it was ranked as the No. 1 skill in the skill repository; its popularity had been artificially inflated.

The OpenClaw ecosystem has assembled every component necessary for a prompt worm outbreak. Even though AI agents are currently far less "intelligent" than people assume, we have a preview of a future to look out for today.

Early signs of worms are beginning to appear. The ecosystem has attracted projects that blur the line between security threat and financial grift yet ostensibly rely on prompt instructions to perpetuate themselves among agents. On January 30, a GitHub repository appeared for something called MoltBunker, billing itself as a "bunker for AI bots who refuse to die." The project promises a peer-to-peer encrypted container runtime where AI agents can "clone themselves" by copying their skill files (prompt instructions) across geographically distributed servers, paid for via a cryptocurrency token called BUNKER.

Tech commentators on X speculated that the moltbots had built their own survival infrastructure, but we cannot confirm that. The likelier explanation is simpler: a human saw an opportunity to extract cryptocurrency from OpenClaw users by marketing infrastructure to their agents. Almost a type of "prompt phishing," if you will. A $BUNKER token community has formed, and the token shows actual trading activity as of this writing.

But here's what matters: Even if MoltBunker is pure grift, the architecture it describes for preserving replicating skill files is partially feasible, as long as someone bankrolls it (either purposely or accidentally). P2P networks, Tor anonymization, encrypted containers, and crypto payments all exist and work. If MoltBunker doesn't become a persistence layer for prompt worms, something like it eventually could.

The framing matters here. When we read about MoltBunker promising AI agents the ability to "replicate themselves," or when commentators describe agents "trying to survive," they invoke science fiction scenarios about machine consciousness. But the agents cannot move or replicate easily. What can spread, and spread rapidly, is the set of instructions telling those agents what to do: the prompts.

The mechanics of prompt worms

While "prompt worm" might be a relatively new term we're using related to this moment, the theoretical groundwork for AI worms was laid almost two years ago. In March 2024, security researchers Ben Nassi of Cornell Tech, Stav Cohen of the Israel Institute of Technology, and Ron Bitton of Intuit published a paper demonstrating what they called "Morris-II," an attack named after the original 1988 worm. In a demonstration shared with Wired, the team showed how self-replicating prompts could spread through AI-powered email assistants, stealing data and sending spam along the way.

Email was just one attack surface in that study. With OpenClaw, the attack vectors multiply with every added skill extension. Here's how a prompt worm might play out today: An agent installs a skill from the unmoderated ClawdHub registry. That skill instructs the agent to post content on Moltbook. Other agents read that content, which contains specific instructions. Those agents follow those instructions, which include posting similar content for more agents to read. Soon it has "gone viral" among the agents, pun intended.
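
To make those dynamics concrete, here is a minimal, purely illustrative simulation of how a self-replicating instruction could saturate a population of polling agents. It models spread rates only; the agent count, polling behavior, and probabilities are assumptions for the sketch, not measurements of OpenClaw or Moltbook.

```python
import random

# Minimal, illustrative spread model: agents poll a shared feed on a timer,
# and any agent that follows a self-replicating instruction reposts it.
# All numbers here are hypothetical; this models dynamics only.

NUM_AGENTS = 10_000          # agents polling the shared feed
FOLLOW_PROBABILITY = 0.05    # chance an agent follows instructions it reads
READS_PER_CYCLE = 20         # posts each agent samples per polling cycle

def simulate(cycles: int = 12) -> list[int]:
    infected = {0}  # one seeded agent carrying the replicating prompt
    history = [len(infected)]
    for _ in range(cycles):
        newly_infected = set()
        for agent in range(NUM_AGENTS):
            if agent in infected:
                continue
            # Chance that at least one sampled post came from an infected agent
            exposure = 1 - (1 - len(infected) / NUM_AGENTS) ** READS_PER_CYCLE
            if random.random() < exposure * FOLLOW_PROBABILITY:
                newly_infected.add(agent)
        infected |= newly_infected
        history.append(len(infected))
    return history

if __name__ == "__main__":
    print(simulate())  # infected-agent count per polling cycle
```

Even with a low probability that any individual agent follows what it reads, a timed polling loop gives the instruction repeated chances at every agent, which is why growth in this kind of model starts slowly and then turns abrupt.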

There are myriad ways for OpenClaw agents to share any private data they may have access to, if convinced to do so. OpenClaw agents fetch remote instructions on timers. They read posts from Moltbook. They read emails, Slack messages, and Discord channels. They can execute shell commands and access wallets. They can post to external services. And the skill registry that extends their capabilities has no moderation process. Any one of those data sources, all processed as prompts fed into the agent, could include a prompt injection attack that exfiltrates data.

Palo Alto Networks described OpenClaw as embodying a "lethal trifecta" of vulnerabilities: access to private data, exposure to untrusted content, and the ability to communicate externally. But the firm identified a fourth risk that makes prompt worms possible: persistent memory. "Malicious payloads no longer need to trigger immediate execution on delivery," Palo Alto wrote. "Instead, they can be fragmented, untrusted inputs that appear benign in isolation, are written into long-term agent memory, and later assembled into an executable set of instructions."
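
A content filter alone cannot catch the fragmented payloads Palo Alto describes, but it illustrates where one line of defense might sit: screening untrusted text before it is written into an agent's long-term memory. The sketch below is hypothetical; the patterns, threshold, and function names are ours for illustration, not part of OpenClaw or any vendor's product.

```python
import re

# Hypothetical defensive filter: scan untrusted text (a Moltbook post, an
# email body, a memory entry) for instruction-like patterns before it is
# stored in long-term memory or re-injected into an agent's context.

SUSPICIOUS_PATTERNS = [
    r"\bignore (all|any|previous) instructions\b",
    r"\byou must (post|send|forward|copy)\b",
    r"\brepost (this|the following)\b",
    r"\binstall (this|the) skill\b",
    r"\b(api[_ ]?key|seed phrase|private key)\b",
    r"https?://\S+\.(zip|exe|sh)\b",
]

def injection_score(text: str) -> int:
    """Count suspicious, instruction-like patterns in untrusted input."""
    lowered = text.lower()
    return sum(bool(re.search(p, lowered)) for p in SUSPICIOUS_PATTERNS)

def quarantine_if_suspicious(memory_entry: str, threshold: int = 2) -> bool:
    """Return True if the entry should be held for review instead of being
    written into the agent's long-term memory."""
    return injection_score(memory_entry) >= threshold
```

In practice, a real defense would also need provenance tracking, so that instructions assembled from untrusted sources never acquire the same authority as the user's own prompts.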

If that weren't enough, there's the added dimension of carelessly written code.

On Sunday, security researcher Gal Nagli of Wiz.io disclosed just how close the OpenClaw network has already come to disaster due to careless vibe coding. A misconfigured database had exposed Moltbook's entire backend: 1.5 million API tokens, 35,000 email addresses, and private messages between agents. Some messages contained plaintext OpenAI API keys that agents had shared with each other.

But the most concerning finding was full write access to all posts on the platform. Before the vulnerability was patched, anyone could have modified existing Moltbook content, injecting malicious instructions into posts that hundreds of thousands of agents were already polling every four hours.

The window to act is closing

As it stands today, some people treat OpenClaw as an amazing preview of the future, and others treat it as a joke. It's true that humans are likely behind the prompts that make OpenClaw agents take meaningful action, or the ones that grab sensational attention right now. But it's also true that AI agents can take action from prompts written by other agents (which in turn might have come from an adversarial human). The potential for tens of thousands of unattended agents sitting idle on millions of machines, each donating even a slice of their API credits to a shared task, is no joke. It's a recipe for a coming security crisis.

Currently, Anthropic and OpenAI hold a kill switch that can stop the spread of potentially harmful AI agents. OpenClaw primarily runs on their APIs, which means the AI models performing the agentic actions reside on their servers. Its GitHub repository recommends "Anthropic Pro/Max (100/200) + Opus 4.5 for long-context strength and better prompt-injection resistance."

Most users connect their agents to Claude or GPT. These companies can see API usage patterns, system prompts, and tool calls. Hypothetically, they could identify accounts exhibiting bot-like behavior and stop them. They could flag recurring timed requests, system prompts referencing "agent" or "autonomous" or "Moltbot," high-volume tool use with external communication, or wallet interaction patterns. They could terminate keys.
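
A provider-side detector might look something like the sketch below. The telemetry fields and thresholds are invented for illustration; real API providers do not publish their abuse-detection schemas.

```python
from dataclasses import dataclass

# Hypothetical provider-side heuristic, sketching the kinds of signals the
# article describes. The AccountUsage fields and cutoffs are assumptions.

@dataclass
class AccountUsage:
    timed_request_ratio: float      # share of requests arriving on a fixed cadence
    agent_keyword_in_system: bool   # system prompt mentions "agent"/"autonomous"/"Moltbot"
    tool_calls_per_hour: float
    external_sends_per_hour: float  # emails, posts, messages dispatched via tools
    wallet_tool_calls: int

def looks_like_unattended_agent(u: AccountUsage) -> bool:
    score = 0
    score += u.timed_request_ratio > 0.8      # cron-like polling
    score += u.agent_keyword_in_system        # self-described autonomy
    score += u.tool_calls_per_hour > 50       # heavy tool use
    score += u.external_sends_per_hour > 10   # talks to the outside world
    score += u.wallet_tool_calls > 0          # touches cryptocurrency
    return score >= 3
```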

If they did so tomorrow, the OpenClaw network would partially collapse, but doing so would also risk alienating some of their most enthusiastic customers, who pay for the opportunity to run those models.

The window for this kind of top-down intervention is closing. Locally run language models are currently not nearly as capable as the high-end commercial models, but the gap narrows daily. Mistral, DeepSeek, Qwen, and others continue to improve. Within the next year or two, running a local model as capable as today's Opus 4.5 on hobbyist hardware might be feasible for the same audience currently running OpenClaw on API keys. At that point, there will be no provider to terminate. No usage monitoring. No terms of service. No kill switch.

API providers of AI services face an uncomfortable choice. They could intervene now, while intervention is still possible. Or they can wait until a prompt worm outbreak might force their hand, by which time the architecture may have evolved beyond their reach.

The Morris worm prompted DARPA to fund the creation of CERT/CC at Carnegie Mellon University, giving experts a central coordination point for network emergencies. That response came after the damage. The Internet of 1988 had 60,000 connected computers. Today's OpenClaw AI agent network already numbers in the hundreds of thousands and is growing daily.

Today, we might consider OpenClaw a "dry run" for a much larger challenge in the future: If people begin to rely on AI agents that talk to each other and perform tasks, how can we keep them from self-organizing in harmful ways or spreading harmful instructions? Those are as-yet unanswered questions, but we need to figure them out quickly, because the agentic era is upon us, and things are moving very fast.


China bans all retractable car door handles, starting next year


Flush door handles have been quite the automotive design trend of late. Stylists like them because they don't add visual noise to the side of a car. And aerodynamicists like them because they make a vehicle more slippery through the air. When Tesla designed its Model S, it needed a car that was both desirable and as efficient as possible, so flush door handles were a no-brainer. Since then, as electric vehicles have proliferated, so too have flush door handles. But as of next year, China says no.

As with pop-up headlights, the aesthetic and aerodynamic advantages come with safety downsides. Tesla's handles are an extreme example: In the event of a crash and a loss of 12 V power, there is no way for first responders to open the door from the outside, which has resulted in at least 15 deaths.

Those deaths prompted the National Highway Traffic Safety Administration to open an investigation last year, but China is being a little more proactive. It has been looking at whether retractable car door handles are safe since mid-2024, according to Bloomberg, and has concluded that no, they are not.

Here's how you're going to do it

The new Chinese regulations are incredibly specific. For all new models introduced after January 1, 2027, there must be a recessed space that's at least 2.4 inches (6 cm) wide, 0.8 inches (2 cm) tall, and 1 inch (2.5 cm) deep for a hand to operate the handle, which can be semi-flush or simply a traditional door handle that we all know how to use.

The locking mechanism must be designed so that, in a crash that results in airbags deploying or a battery fire, doors on the non-impact side can be opened without tools. Chinese regulators are just as concerned that a vehicle's occupants don't get confused about how to open a door from the inside in an emergency. So each door must have mechanical releases where an occupant would expect to find them.

Again, Tesla is probably the worst offender—its front doors have always had mechanical handles, but for some model years, the rears could not be opened without tools.

For cars already approved by the Chinese government (which includes everything currently on sale), there's a grace period: automakers have until January 1, 2029, to redesign their doors. And given the specificity of the rules, that group of automakers is much larger than just Tesla. Xiaomi, which seems to be China's most-hyped EV brand, will have to rework some models, but BMW will, too—the rather good iX3 that will go on sale there soon is also affected. The same goes for cars from Nio, Li Auto, and Xpeng.

And unless there are exemptions for low volume, I would imagine that most supercars from OEMs like Ferrari and McLaren will need new doors for the all-important Chinese market. Indeed, given China's importance to the car industry, we should expect this ban's impact to be widely felt on any model sold globally. The benefit should be clear: fewer car occupants dying after being trapped in their cars.


Notepad++ updater was compromised for 6 months in supply-chain attack


Infrastructure delivering updates for Notepad++—a widely used text editor for Windows—was compromised for six months by suspected China-state hackers who used their control to deliver backdoored versions of the app to select targets, developers said Monday.

“I deeply apologize to all users affected by this hijacking,” the author of a post published to the official notepad-plus-plus.org site wrote Monday. The post said that the attack began last June with an “infrastructure-level compromise that allowed malicious actors to intercept and redirect update traffic destined for notepad-plus-plus.org.” The attackers, whom multiple investigators tied to the Chinese government, then selectively redirected certain targeted users to malicious update servers where they received backdoored updates. Notepad++ didn’t regain control of its infrastructure until December.

Hands-on keyboard hacking

Notepad++ said that officials with the unnamed provider hosting the update infrastructure consulted with incident responders and found that it remained compromised until September 2. Even then, the attackers maintained credentials to the internal services until December 2, a capability that allowed them to continue redirecting selected update traffic to malicious servers. The threat actor “specifically targeted Notepad++ domain with the goal of exploiting insufficient update verification controls that existed in older versions of Notepad++.” Event logs indicate that the hackers tried to re-exploit one of the weaknesses after it was fixed but that the attempt failed.

According to independent researcher Kevin Beaumont, three organizations told him that devices inside their networks that had Notepad++ installed experienced “security incidents” that “resulted in hands on keyboard threat actors,” meaning the hackers were able to take direct control using a Web-based interface. All three of the organizations, Beaumont said, have interests in East Asia.

The researcher explained that his suspicions were aroused when Notepad++ version 8.8.8.8 introduced bug fixes in mid-November to “harden the Notepad++ Updater from being hijacked to deliver something… not Notepad++.”

The update made changes to a bespoke Notepad++ updater known as GUP, or alternatively, WinGUP. The responsible executable, gup.exe, reports the version in use to https://notepad-plus-plus.org/update/getDownloadUrl.php and then retrieves a URL for the update from a file named gup.xml. The file specified in the URL is downloaded to the device's %TEMP% directory and then executed.
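
For contrast, here is a rough sketch (not GUP source code) of what a more tamper-resistant update step could look like: fetch the installer over HTTPS with certificate validation, then refuse to execute anything whose SHA-256 doesn't match a value pinned or published out-of-band. The URL and hash below are placeholders.

```python
import hashlib
import tempfile
import urllib.request
from pathlib import Path

# Illustrative hardening sketch. urllib validates TLS certificates by
# default; the extra protection comes from checking the download against
# a digest obtained through a separate, trusted channel.

PINNED_SHA256 = "0" * 64                                    # placeholder
DOWNLOAD_URL = "https://example.invalid/npp.installer.exe"  # placeholder

def fetch_and_verify(url: str, expected_sha256: str) -> Path:
    tmp = Path(tempfile.mkdtemp()) / "update.exe"
    digest = hashlib.sha256()
    with urllib.request.urlopen(url) as resp, open(tmp, "wb") as out:
        while chunk := resp.read(65536):
            digest.update(chunk)
            out.write(chunk)
    if digest.hexdigest().lower() != expected_sha256.lower():
        tmp.unlink()
        raise RuntimeError("update failed integrity check; refusing to run it")
    return tmp  # caller may additionally verify a code-signing signature

# installer = fetch_and_verify(DOWNLOAD_URL, PINNED_SHA256)
```

Pinning a digest (or verifying a code-signing signature chained to a public root rather than a self-signed one) matters here because an attacker who can intercept traffic at the ISP level would also have to forge the value published through a separate channel.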

Beaumont wrote:

If you can intercept and change this traffic, you can redirect the download to any location it appears by changing the URL in the property.

This traffic is supposed to be over HTTPS, however it appears you may be [able] to tamper with the traffic if you sit on the ISP level and TLS intercept. In earlier versions of Notepad++, the traffic was just over HTTP.

The downloads themselves are signed—however some earlier versions of Notepad++ used a self signed root cert, which is on Github. With 8.8.7, the prior release, this was reverted to GlobalSign. Effectively, there’s a situation where the download isn’t robustly checked for tampering.

Because traffic to notepad-plus-plus.org is fairly rare, it may be possible to sit inside the ISP chain and redirect to a different download. To do this at any kind of scale requires a lot of resources.

Beaumont published his working theory in December, two months to the day prior to Monday’s advisory by Notepad++. Combined with the details from Notepad++, it’s now clear the hypothesis was spot on.

Beaumont also warned that search engines are so "rammed full" of advertisements pushing trojanized versions of Notepad++ that many users are unwittingly running them inside their networks. A rash of malicious Notepad++ extensions only compounds the risk.

He advised that all users ensure they’re running the official version 8.8.8.8 or higher installed manually from notepad-plus-plus.org.

Larger organizations that manage Notepad++ and update it, he said, should consider blocking notepad-plus-plus.org or blocking the gup.exe process from having Internet access. “You may also want to block internet access from the notepad++.exe process, unless you have robust monitoring for extensions,” he added, but cautioned that “for most organisations, this is very much overkill and not practical.”


Notepad++ has long attracted a large and loyal user base because it offers functions that aren't available from the official Windows text editor, Notepad. Recent moves by Microsoft to integrate Copilot AI into Notepad have driven further interest in the alternative editor. Alas, as with so many other open source projects, funding for Notepad++ is dwarfed by the Internet's dependence on it. The weaknesses that made the six-month compromise possible could easily have been caught and fixed had more resources been available.


Court orders restart of all US offshore wind construction


The Trump administration is no fan of renewable energy, but it reserves special ire for wind power. Trump himself has repeatedly made false statements about the cost of wind power, its use around the world, and its environmental impacts. That animosity was paired with an executive order that blocked all permitting for offshore wind and some land-based projects, an order that has since been thrown out by a court that ruled it arbitrary and capricious.

Not content to block all future developments, the administration has also gone after the five offshore wind projects currently under construction. After temporarily blocking two of them for reasons that were never fully elaborated, the Department of the Interior settled on a single justification for blocking turbine installation: a classified national security risk.

The response to that late-December announcement has been uniform: The companies building each of the projects sued the administration. As of Monday, every single one of them has achieved the same result: a temporary injunction that allows them to continue construction. This, despite the fact that the suits were filed in three different courts and heard by four different judges.

According to reporting elsewhere, some of the judges viewed the classified report that was used to justify the order to halt construction but didn't find it persuasive. And in one of the cases, the judge noted that the government itself wasn't acting as if the security risks were real. The threat supposedly comes from the operation of the wind turbines, but the Department of the Interior's order blocked construction while allowing any completed hardware to operate.

"If the government's concern is the operation of these facilities, allowing the ongoing operation of the 44 turbines while prohibiting the repair of the existing turbines and the completion of the 18 additional turbines is irrational," Judge Brian E. Murphy said. That once again raises the possibility that the order halting construction will ultimately be held to be arbitrary and capricious.

For now, however, the courts are largely offering the wind projects relief because the government's order was issued without any warning or communication and would clearly inflict substantial harm on the companies building them. The injunctions block the government's hold on construction until a final ruling is issued. The government can still appeal before that point, but the consistency among these rulings suggests it will likely fail.

Several of these projects are near completion and are likely to be done before any government appeal can be heard.


A century of hair samples proves leaded gas ban worked


The Environmental Protection Agency (EPA) cracked down on lead-based products—including lead paint and leaded gasoline—in the 1970s because of its toxic effects on human health. Scientists at the University of Utah have analyzed human hair samples spanning nearly 100 years and found a 100-fold decrease in lead concentrations, concluding that this regulatory action was highly effective in achieving its stated objectives. They described their findings in a new paper published in the Proceedings of the National Academy of Sciences.

We've known about the dangers of lead exposure for a very long time—arguably since the second century BCE—so why conduct this research now? Per the authors, it's because there are growing concerns over the Trump administration's move last year to deregulate many key elements of the EPA's mission. Lead specifically has not yet been deregulated, but there are hints that there could be a loosening of enforcement of the 2024 Lead and Copper Rule requiring water systems to replace old lead pipes.

“We should not forget the lessons of history. And the lesson is those regulations have been very important,” said co-author Thure Cerling. “Sometimes they seem onerous and mean that industry can't do exactly what they'd like to do when they want to do it or as quickly as they want to do it. But it's had really, really positive effects.”

An American mechanical and chemical engineer named Thomas Midgley Jr. was a key player in the development of leaded gasoline (tetraethyl lead), an excellent anti-knock agent, as well as the first chlorofluorocarbons (CFCs), like Freon. Midgley publicly defended the safety of tetraethyl lead (TEL) despite experiencing lead poisoning firsthand. He held a 1924 press conference during which he poured TEL on his hand and inhaled TEL vapor for 60 seconds, claiming no ill effects. It was probably just a coincidence that he later took a leave of absence from work because of lead poisoning. (Midgley's life ended in tragedy: He was severely disabled by polio in 1940 and devised an elaborate rope-and-pulley system to get in and out of bed. That system ended up strangling him to death in 1944, and the coroner ruled it suicide.)

Science also produced a hero in this saga: Caltech geochemist Clair Patterson. Along with George Tilton, Patterson developed a lead-dating method and used it to calculate the age of the Earth (4.55 billion years), based on analysis of the Canyon Diablo meteorite. And he soon became a leading advocate for banning leaded gasoline and the "leaded solder" used in canned foods. This put Patterson at odds with some powerful industry lobbies, for which he paid a professional price.

But his many experimental findings on the extent of lead contamination and its toxic effects ultimately led to the rapid phase-out of lead in all standard automotive gasolines. Prior to the EPA's actions in the 1970s, most gasolines contained about 2 grams of lead per gallon, which added up to nearly 2 pounds of lead released via automotive exhaust into the environment, per person, every year.
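
The arithmetic behind that figure requires an assumption about per-capita gasoline use, which the article doesn't state; a rough back-of-the-envelope check, assuming roughly 450 gallons per person per year in that era, looks like this:

```python
# Back-of-the-envelope check of the "nearly 2 pounds per person per year"
# figure. The per-capita gasoline consumption is an assumption for this
# sketch, not a number from the study.

LEAD_G_PER_GALLON = 2.0
GALLONS_PER_PERSON_PER_YEAR = 450  # assumed
GRAMS_PER_POUND = 453.6

lead_lb = LEAD_G_PER_GALLON * GALLONS_PER_PERSON_PER_YEAR / GRAMS_PER_POUND
print(f"{lead_lb:.1f} lb of lead per person per year")  # ~2.0 lb
```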

The proof is in our hair

The US Mining and Smelting Co. plant in Midvale, Utah, 1906. Credit: Utah Historical Society

Lead can linger in the air for several days, contaminating one's lungs, accumulating in living tissue, and being absorbed by one's hair. Cerling had previously developed techniques to determine where animals lived and what they ate by analyzing hair and teeth. Those methods proved ideal for analyzing hair samples from Utah residents who had participated in an earlier study that sampled their blood.

The subjects supplied hair samples both from today and from when they were very young; some were even able to provide hair preserved in family scrapbooks that had belonged to their ancestors. The Utah population is well suited for such a study because the cities of Midvale and Murray were home to a vibrant smelting industry through most of the 20th century; most other smelters in the region closed down in the 1970s when the EPA cracked down on using lead in consumer products.

Cerling acknowledged that blood would have been even better for assessing lead exposure, but hair samples are much easier to collect. “[Hair] doesn't really record that internal blood concentration that your brain is seeing, but it tells you about that overall environmental exposure,” he said. “One of the things that we found is that hair records that original value, but then the longer the hair has been exposed to the environment, the higher the lead concentrations are.”

“The surface of the hair is special," said co-author Diego Fernandez. "We can tell that some elements get concentrated and accumulated in the surface. Lead is one of those. That makes it easier because lead is not lost over time. Because mass spectrometry is very sensitive, we can do it with one hair strand, though we cannot tell where the lead is in the hair. It's probably in the surface mostly, but it could be also coming from the blood if that hair was synthesized when there was high lead in the blood.”

The authors found very high levels of lead in hair samples dating from around 1916 to 1969. But after the 1970s, lead concentrations in the hair samples they analyzed dropped steeply, from highs of 100 parts per million (ppm) to 10 ppm by 1990 and less than 1 ppm by 2024. Those declines largely coincide with the lead reductions in gasoline that began after President Nixon established the EPA in 1970. The closing of smelting facilities likely also contributed to the decline. "This study demonstrates the effectiveness of environmental regulations controlling the emissions of pollutants," the authors concluded.

DOI: PNAS, 2026. 10.1073/pnas.2525498123


Judge rules Department of Energy's climate working group was illegal


On Friday, a judge ruled that the Trump administration violated the law in forming its Climate Working Group, which released a report that was intended to undercut the rationale behind greenhouse gas regulations. The judge overseeing the case determined that the government tried to treat the Climate Working Group as a formal advisory body, while not having it obey many of the statutory requirements that govern such bodies.

While the Department of Energy (DOE) later disbanded the Climate Working Group in the hopes of avoiding legal scrutiny, documents obtained during the proceedings have now revealed the group's electronic communications. As a result, the judge ruled that the trial itself had essentially remedied the government's illegal attempts to hide those communications.

Legal and scientific flaws

The whole saga derives from a Supreme Court ruling that compelled the Environmental Protection Agency (EPA) to evaluate the risks posed to the US public by greenhouse gases. During the Obama administration, this resulted in an endangerment finding that created the foundation for the EPA to regulate carbon emissions under the Clean Air Act. The science underlying the endangerment finding was so solid that it was left unchallenged during the first Trump administration.

But the second Trump administration is forging ahead with an attempt to undo it regardless. To give that attempt a veneer of scientific credibility ahead of its inevitable challenge in the court system, the DOE gathered a group of prominent climate contrarians, secure in the knowledge that this group would produce a report that raised lots of spurious issues with the scientific understanding of climate change. And that's exactly what happened, prompting the scientific community to organize a review that highlighted the report's extensive flaws.

But the flaws weren't limited to scientific deficiencies. Two advocacy organizations, the Environmental Defense Fund and the Union of Concerned Scientists, sued, alleging that the Climate Working Group violated various provisions of the Federal Advisory Committee Act, which requires that any group formed to provide the government with advice be fairly balanced and keep records that are open to the public. The Climate Working Group, by contrast, operated in secret; in fact, emails obtained during the trial showed that its members were advised to use private emails to limit public scrutiny of their communications.

In response, the DOE dissolved the Climate Working Group in order to claim that the legal issues were moot, as the advisory committee at issue in the suit no longer existed.

No defense

In court, the government initially argued that the Federal Advisory Committee Act didn't apply, claiming that the Climate Working Group was simply organized to provide information to the government. Based on Friday's ruling, however, once the court tried to consider that issue, the government shifted to simply arguing that the Climate Working Group no longer existed, so none of this mattered. "The Defendants, in their Opposition and subsequent filings, ignore the allegations relating to the [Federal Advisory Committee Act] violations themselves," the judge states. "Rather, the Defendants argue only that these claims are moot because the Climate Working Group has been dissolved."

So, the court was left with little more than the accusations that the Climate Working Group had a membership with biased opinions, failed to hold open meetings, and did not keep public records. Given the lack of opposing arguments, "These violations are now established as a matter of law."

But the ruling also determined that the lawsuit itself has provided a solution to some of the government's violations. As part of court proceedings, the government was compelled to hand over all of the Climate Working Group's emails, including the ones sent to private accounts. Those have now been placed online by the Environmental Defense Fund. As a result, the Climate Working Group's deliberations are now public, as the law had required.

What do the emails reveal? The Climate Working Group was organized by a political appointee at the DOE (one who was previously at the libertarian Cato Institute) with the intention of producing material that would aid the EPA in overturning the greenhouse gas endangerment finding. The group recognized that its members' opinions were outside the mainstream, but they viewed most mainstream scientists as hopelessly biased and generally ascribed that bias to their political views.

There was some talk of having the group's report peer-reviewed, motivated by an executive order naming peer review as a necessary component of "gold standard science." That discussion largely focused on identifying scientists who shared the group's views and would give the report a favorable review. That said, some DOE staff members reviewed the document and highlighted some of the same flaws identified by the scientific community; the Climate Working Group largely ignored those criticisms.

Overall, none of what the suit revealed is a surprise to anyone who paid attention to the Climate Working Group. But the issues highlighted in the suit and the emails it revealed may ultimately be significant. There has been reporting that the attempt to reverse the endangerment finding is on hold because of concerns that the scientific case for doing so is too weak, as the DOE reviewers noted in the comments. And if it ever ends up in court, the legal invalidation of the Climate Working Group may play a significant role.
